ContextCapture User Guide

Principle

ContextCapture takes as input a set of digital photographs of a static subject, taken from different viewpoints.

Various additional input data may be provided: camera properties (focal length, sensor size, principal point, lens distortion), photo positions (GPS), photo rotations (INS), control points, ...
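
For illustration only, the per-photo metadata listed above might be represented as in the following sketch (Python; the class and field names are hypothetical and do not reflect ContextCapture's actual interface):

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class CameraProperties:
        focal_length_mm: float                    # lens focal length
        sensor_width_mm: float                    # physical sensor width
        principal_point_px: Tuple[float, float]   # optical center, in pixels
        distortion: Tuple[float, ...] = ()        # lens distortion coefficients

    @dataclass
    class PhotoMetadata:
        file_path: str
        camera: CameraProperties
        position_wgs84: Optional[Tuple[float, float, float]] = None    # GPS: lat, lon, altitude
        rotation_ypr_deg: Optional[Tuple[float, float, float]] = None  # INS: yaw, pitch, roll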

Without manual intervention, and within minutes to hours of computation time depending on the size of the input data, ContextCapture outputs a high-resolution textured triangular mesh.

The output 3D mesh constitutes an accurate visual and geometric approximation of the parts of the subject adequately covered by the input photographs.

Suitable Subjects

ContextCapture's versatility allows it to seamlessly reconstruct subjects of various scales, ranging from centimeters to kilometers, photographed from the ground or from the air. The precision of the resulting 3D model is limited only by the resolution of the input photographs.
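
To see how photo resolution bounds precision, consider the ground sampling distance (GSD), i.e. the size on the subject covered by one image pixel. A minimal sketch, assuming a simple pinhole camera model (the function name and example values below are hypothetical):

    def ground_sampling_distance(pixel_pitch_m: float,
                                 focal_length_m: float,
                                 distance_m: float) -> float:
        """Distance on the subject covered by one image pixel (pinhole model)."""
        return pixel_pitch_m * distance_m / focal_length_m

    # Hypothetical example: 4 um pixel pitch, 35 mm lens, subject 30 m away
    gsd = ground_sampling_distance(4e-6, 35e-3, 30.0)
    print(f"GSD: {gsd * 1000:.1f} mm per pixel")  # ~3.4 mm

In this example, no detail finer than a few millimeters can be recovered from the photographs, regardless of processing.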

ContextCapture performs best on geometrically complex, textured, matte surfaces, including but not limited to buildings, terrain and vegetation.

Surfaces with no color variation (e.g. solid-color walls, floors and ceilings), or with reflective, glossy, transparent or refractive materials (e.g. glass, metal, plastic, water and, to a lesser extent, skin) may cause holes, bumps or noise in the generated 3D model.

ContextCapture is intended for static subjects. Moving objects (people, vehicles, animals), when not dominant in the scene, can be handled at the cost of occasional artifacts in the generated 3D model. Human and animal subjects should remain still during acquisition, or should be photographed with multiple synchronized cameras.